
    Girth Alternative for HNN Extensions

    We prove the Girth Alternative for a subclass of HNN extensions of finitely generated groups. We also produce counterexamples to show that beyond our class, the alternative fails in general. Comment: We extended one of the main results, proving the Girth Alternative for HNN extensions of word hyperbolic groups (instead of HNN extensions of free groups). Several typing errors corrected, remarks added.
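
    For readers unfamiliar with the terminology, the following recalls the standard definition of the girth of a finitely generated group and the usual shape of a Girth Alternative; this is background as we understand it, not a statement taken from the paper.

        \[
          \operatorname{girth}(G) \;=\; \sup_{S}\ \operatorname{girth}\bigl(\operatorname{Cay}(G,S)\bigr),
        \]

    where $S$ ranges over the finite generating sets of $G$, and the girth of the Cayley graph $\operatorname{Cay}(G,S)$ is the length of its shortest nontrivial cycle. A Girth Alternative for a class of groups $\mathcal{C}$ then asserts that every finitely generated $G \in \mathcal{C}$ either satisfies a law (for instance, is virtually solvable) or has infinite girth.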

    Proof-Carrying Data from Accumulation Schemes

    Recursive proof composition has been shown to lead to powerful primitives such as incrementally-verifiable computation (IVC) and proof-carrying data (PCD). All existing approaches to recursive composition take a succinct non-interactive argument of knowledge (SNARK) and use it to prove a statement about its own verifier. This technique requires that the verifier run in time sublinear in the size of the statement it is checking, a strong requirement that restricts the class of SNARKs from which PCD can be built. This in turn restricts the efficiency and security properties of the resulting scheme. Bowe, Grigg, and Hopwood (ePrint 2019/1021) outlined a novel approach to recursive composition, and applied it to a particular SNARK construction which does *not* have a sublinear-time verifier. However, they omit details about this approach and do not prove that it satisfies any security property. Nonetheless, schemes based on their ideas have already been implemented in software. In this work, we present a collection of results that establish the theoretical foundations for a generalization of the above approach. We define an *accumulation scheme* for a non-interactive argument, and show that this suffices to construct PCD, even if the argument itself does not have a sublinear-time verifier. Moreover, we give constructions of accumulation schemes for SNARKs, which yield PCD schemes with novel efficiency and security features.
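
    To make the interface concrete, here is a minimal Python sketch of the accumulation-scheme idea described above: a cheap accumulation verifier runs at every recursion step, and a single (possibly expensive) decider runs once at the end. The names (Accumulator, acc_prove, acc_verify, decide) and the hash-based placeholders are illustrative assumptions, not the paper's construction.

        import hashlib
        from dataclasses import dataclass

        def _h(*parts: bytes) -> bytes:
            return hashlib.sha256(b"|".join(parts)).digest()

        @dataclass
        class Accumulator:
            digest: bytes  # stands in for the accumulated predicate inputs

        def acc_prove(old_acc: Accumulator, proof: bytes) -> Accumulator:
            # Accumulation prover: folds one more proof into the running accumulator.
            return Accumulator(_h(old_acc.digest, proof))

        def acc_verify(old_acc: Accumulator, proof: bytes, new_acc: Accumulator) -> bool:
            # Accumulation verifier: a cheap check that the fold was performed
            # correctly; it does NOT re-verify the proof itself, which is the point.
            return new_acc.digest == _h(old_acc.digest, proof)

        def decide(acc: Accumulator) -> bool:
            # Decider: one (possibly expensive) check, run only once at the very end.
            return len(acc.digest) == 32  # placeholder for the real decision procedure

        # IVC-style recursion: each step contributes a proof, the accumulator absorbs
        # it, and only the cheap acc_verify runs inside the loop.
        acc = Accumulator(_h(b"genesis"))
        for step in range(5):
            proof = _h(b"proof-for-step", bytes([step]))  # stand-in for a SNARK proof
            new_acc = acc_prove(acc, proof)
            assert acc_verify(acc, proof, new_acc)
            acc = new_acc
        assert decide(acc)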

    Proofs for Inner Pairing Products and Applications

    We present a generalized inner product argument and demonstrate its applications to pairing-based languages. We apply our generalized argument to proving that an inner pairing product is correctly evaluated with respect to committed vectors of $n$ source group elements. With a structured reference string (SRS), we achieve a logarithmic-time verifier whose work is dominated by $6 \log n$ target group exponentiations. Proofs are of size $6 \log n$ target group elements, computed using $6n$ pairings and $4n$ exponentiations in each source group. We apply our inner product arguments to build the first polynomial commitment scheme with succinct (logarithmic) verification, $O(\sqrt{d})$ prover complexity for degree $d$ polynomials (not including the cost to evaluate the polynomial), and a CRS of size $O(\sqrt{d})$. Concretely, this means that for $d = 2^{28}$, producing an evaluation proof in our protocol is $76\times$ faster than doing so in the KZG commitment scheme, and the CRS in our protocol is $1{,}000\times$ smaller: 13 MB vs. 13 GB for KZG. This gap only grows as the degree increases. Our polynomial commitment scheme is applicable to both univariate and bivariate polynomials. As a second application, we introduce an argument for aggregating $n$ $\mathsf{Groth16}$ zkSNARKs into an $O(\log n)$-sized proof. Our protocol is significantly more efficient than aggregating these SNARKs via recursive composition (BCGMMW20): we can aggregate about 130,000 proofs in 25 min, while in the same time recursive composition aggregates just 90 proofs. Finally, we show how to apply our aggregation protocol to construct a low-memory SNARK for machine computations. For a computation that requires time $T$ and space $S$, our SNARK produces proofs in space $\tilde{\mathcal{O}}(S+T)$, which is significantly more space efficient than a monolithic SNARK, which requires space $\tilde{\mathcal{O}}(S \cdot T)$.
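
    For reference, the object being proven about is the inner pairing product of two committed vectors; the definition below, together with the arithmetic behind the CRS comparison, is written out as we read the abstract and is an illustration rather than an excerpt from the paper.

        \[
          \mathrm{IPP}(\mathbf{A}, \mathbf{B}) \;=\; \prod_{i=1}^{n} e(A_i, B_i), \qquad A_i \in \mathbb{G}_1,\ B_i \in \mathbb{G}_2,
        \]

    where $e : \mathbb{G}_1 \times \mathbb{G}_2 \to \mathbb{G}_T$ is a bilinear pairing. On the CRS comparison: an $O(\sqrt{d})$-sized reference string at $d = 2^{28}$ scales with $\sqrt{2^{28}} = 2^{14} = 16{,}384$ group elements (up to constant factors), whereas a KZG setup scales with $2^{28}$ elements.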

    KNOWLEDGE, ATTITUDE AND AWARENESS OF MEDICAL AND PARAMEDICAL STUDENTS TOWARDS COVID-19 BOOSTER VACCINATION IN A TERTIARY CARE TEACHING HOSPITAL: A SURVEY BASED CROSS-SECTIONAL STUDY.

    Background: COVID-19 booster vaccination was launched in India on 15 July 2022. Medical and paramedical students play a pivotal role in motivating the general public in a given locality towards a nation's vaccination drive. The present study aimed to evaluate students' perspectives on COVID-19 booster vaccination. Objective: To assess medical and paramedical students' knowledge, attitude, and awareness of COVID-19 booster vaccination. Method: A cross-sectional study was carried out between 14 August 2022 and 12 September 2022 among medical and paramedical students through an online survey questionnaire. The data obtained were tabulated in Microsoft Excel. Study variables were expressed as frequencies/percentages and represented graphically. Result: Our study revealed that MBBS (99.5%), Nursing (98.6%), and DMLT (94.8%) students have good knowledge about the availability of booster vaccination. 97.4% of MBBS, 100% of nursing, and 90.9% of DMLT students want to motivate the general population towards immunization. At the same time, 29.1% of MBBS, 54.1% of nursing, and 24.7% of DMLT students were apprehensive about possible adverse effects of the booster vaccination. 56.7% of MBBS, 27% of nursing, and 48.1% of DMLT students are unaware of the safety of booster doses in pregnancy and lactation. Conclusion: Awareness of booster vaccination was adequate among the majority of participants. Most were confident about motivating the general public towards vaccination. However, the hesitancy observed with respect to vulnerable populations could be attributed to the paucity of information about the long-term safety and efficacy of the booster vaccination. Recommendation: Messaging around boosters and vaccines needs to emphasize that they are safe and convenient to take and that both are important.

    Delphi: A Cryptographic Inference Service for Neural Networks

    Many companies provide neural network prediction services to users for a wide range of applications. However, current prediction systems compromise one party's privacy: either the user has to send sensitive inputs to the service provider for classification, or the service provider must store its proprietary neural networks on the user's device. The former harms the personal privacy of the user, while the latter reveals the service provider's proprietary model. We design, implement, and evaluate Delphi, a secure prediction system that allows two parties to execute neural network inference without revealing either party's data. Delphi approaches the problem by simultaneously co-designing cryptography and machine learning. We first design a hybrid cryptographic protocol that improves upon the communication and computation costs over prior work. Second, we develop a planner that automatically generates neural network architecture configurations that navigate the performance-accuracy trade-offs of our hybrid protocol. Together, these techniques allow us to achieve a 22x improvement in online prediction latency compared to the state-of-the-art prior work.
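
    As a rough illustration of the kind of cryptographic building block a hybrid protocol like this relies on for its linear layers, here is a toy Python sketch of additive secret sharing and its linearity. It treats the weight as public purely to show the local-computation property; it is not Delphi's protocol, which keeps the model on the server and handles the weight-dependent work in an offline preprocessing phase.

        import random

        P = 2**61 - 1  # prime modulus for the additive shares (illustrative choice)

        def share(x: int) -> tuple[int, int]:
            # Split x into two random-looking additive shares modulo P.
            r = random.randrange(P)
            return r, (x - r) % P          # (client's share, server's share)

        def reconstruct(a: int, b: int) -> int:
            return (a + b) % P

        # Toy "linear layer": multiply the client's secret input x by a weight w.
        # The weight is known to both parties here ONLY to demonstrate linearity.
        x, w = 42, 7
        x_client, x_server = share(x)

        y_client = (w * x_client) % P   # computed locally on the client's share
        y_server = (w * x_server) % P   # computed locally on the server's share

        # The shares of the result reconstruct to the true output w * x, even
        # though neither party ever saw the other's share of x.
        assert reconstruct(y_client, y_server) == (w * x) % P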

    Spontaneous Polarization in an Ultrathin Improper-Ferroelectric/Dielectric Bilayer in a Capacitor Structure at Cryogenic Temperatures

    To determine the effect of depolarization and the critical thickness in improper-ferroelectric hexagonal-ferrite thin films, we investigate the polarization switching of a ferroelectric/dielectric bilayer in capacitor structures at 20 K. Experimentally, we show that the spontaneous polarization persists throughout the studied thickness range (3 to 80 unit cells), even with a thick (10-nm) dielectric layer, suggesting no practical thickness limit for applications. By fitting the effect of depolarization using the phenomenological theory, we show that the spontaneous polarization remains finite as the thickness of the ferroelectric layer approaches zero, providing a hint for the absence of a critical thickness. We also find that interfacial effects limit multidomain formation and govern the polarization switching mechanisms.
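
    For context on why a dielectric layer in series matters, the textbook estimate of the depolarizing field in a short-circuited ferroelectric/dielectric bilayer capacitor (from continuity of the displacement field and zero total voltage, with no free interface charge) is recalled below; this is standard electrostatics, not the phenomenological model actually fitted in the paper.

        \[
          E_{f} \;=\; -\,\frac{P\, d_{d}}{\varepsilon_{0}\left(\varepsilon_{d}\, d_{f} + \varepsilon_{f}\, d_{d}\right)},
        \]

    where $P$ is the spontaneous polarization, $d_{f}$ and $d_{d}$ are the ferroelectric and dielectric thicknesses, and $\varepsilon_{f}$, $\varepsilon_{d}$ are their relative background permittivities. The field opposes $P$ and grows with the relative weight of the dielectric layer, which is the depolarization effect the bilayer measurements probe.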

    IndicNLG Benchmark: Multilingual Datasets for Diverse NLG Tasks in Indic Languages

    Natural Language Generation (NLG) for non-English languages is hampered by the scarcity of datasets in these languages. In this paper, we present the IndicNLG Benchmark, a collection of datasets for benchmarking NLG for 11 Indic languages. We focus on five diverse tasks, namely biography generation using Wikipedia infoboxes, news headline generation, sentence summarization, paraphrase generation, and question generation. We describe the created datasets and use them to benchmark the performance of several monolingual and multilingual baselines that leverage pre-trained sequence-to-sequence models. Our results exhibit the strong performance of multilingual language-specific pre-trained models, and the utility of models trained on our dataset for other related NLG tasks. Our dataset creation methods can be easily applied to modest-resource languages, as they involve simple steps such as scraping news articles and Wikipedia infoboxes, light cleaning, and pivoting through machine translation data. To the best of our knowledge, the IndicNLG Benchmark is the first NLG benchmark for Indic languages and the most diverse multilingual NLG dataset, with approximately 8M examples across 5 tasks and 11 languages. The datasets and models are publicly available at https://ai4bharat.iitm.ac.in/indicnlg-suite. Comment: Accepted at EMNLP 2022
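
    As an example of what running one of the pre-trained sequence-to-sequence baselines looks like, here is a short, hedged Python sketch for the headline-generation task using an off-the-shelf multilingual model via Hugging Face Transformers. The model choice (google/mt5-small), the placeholder article text, and the generation settings are illustrative assumptions; a real baseline would first be fine-tuned on the IndicNLG training split for the target language.

        # Minimal sketch (assumed setup): interface for a headline-generation baseline.
        # The raw mT5 checkpoint is NOT fine-tuned on IndicNLG, so its output here is
        # not meaningful; a real baseline would first fine-tune on the benchmark's
        # training split for the target language.
        from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

        model_name = "google/mt5-small"  # illustrative multilingual seq2seq model
        tokenizer = AutoTokenizer.from_pretrained(model_name)
        model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

        article = "<Indic-language news article text goes here>"  # placeholder input
        inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=512)

        # Beam search over a short output window, roughly headline-length.
        output_ids = model.generate(**inputs, num_beams=4, max_length=32)
        print(tokenizer.decode(output_ids[0], skip_special_tokens=True))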